<?xml version="1.0" encoding="ISO-8859-1"?>
<metadatalist>
	<metadata ReferenceType="Conference Proceedings">
		<site>sibgrapi.sid.inpe.br 802</site>
		<holdercode>{ibi 8JMKD3MGPEW34M/46T9EHH}</holdercode>
		<identifier>8JMKD3MGPEW34M/45CKFQB</identifier>
		<repository>sid.inpe.br/sibgrapi/2021/09.04.19.00</repository>
		<lastupdate>2021:09.06.14.47.39 sid.inpe.br/banon/2001/03.30.15.38 administrator</lastupdate>
		<metadatarepository>sid.inpe.br/sibgrapi/2021/09.04.19.00.11</metadatarepository>
		<metadatalastupdate>2022:06.14.00.00.24 sid.inpe.br/banon/2001/03.30.15.38 administrator {D 2021}</metadatalastupdate>
		<doi>10.1109/SIBGRAPI54419.2021.00041</doi>
		<citationkey>MenezesFerrPereGome:2021:BiFaFa</citationkey>
		<title>Bias and Fairness in Face Detection</title>
		<format>On-line</format>
		<year>2021</year>
		<numberoffiles>1</numberoffiles>
		<size>496 KiB</size>
		<author>Menezes, Hanna França,</author>
		<author>Ferreira, Arthur Silva Cavalcante,</author>
		<author>Pereira, Eanes Torres,</author>
		<author>Gomes, Herman Martins,</author>
		<affiliation>Universidade Federal de Campina Grande</affiliation>
		<affiliation>Universidade Federal de Campina Grande</affiliation>
		<affiliation>Universidade Federal de Campina Grande</affiliation>
		<affiliation>Universidade Federal de Campina Grande</affiliation>
		<editor>Paiva, Afonso,</editor>
		<editor>Menotti, David,</editor>
		<editor>Baranoski, Gladimir V. G.,</editor>
		<editor>Proença, Hugo Pedro,</editor>
		<editor>Junior, Antonio Lopes Apolinario,</editor>
		<editor>Papa, João Paulo,</editor>
		<editor>Pagliosa, Paulo,</editor>
		<editor>dos Santos, Thiago Oliveira,</editor>
		<editor>e Sá, Asla Medeiros,</editor>
		<editor>da Silveira, Thiago Lopes Trugillo,</editor>
		<editor>Brazil, Emilio Vital,</editor>
		<editor>Ponti, Moacir A.,</editor>
		<editor>Fernandes, Leandro A. F.,</editor>
		<editor>Avila, Sandra,</editor>
		<e-mailaddress>hanna@copin.ufcg.edu.br</e-mailaddress>
		<conferencename>Conference on Graphics, Patterns and Images, 34 (SIBGRAPI)</conferencename>
		<conferencelocation>Gramado, RS, Brazil (virtual)</conferencelocation>
		<date>18-22 Oct. 2021</date>
		<publisher>IEEE Computer Society</publisher>
		<publisheraddress>Los Alamitos</publisheraddress>
		<booktitle>Proceedings</booktitle>
		<tertiarytype>Full Paper</tertiarytype>
		<transferableflag>1</transferableflag>
		<versiontype>finaldraft</versiontype>
		<keywords>Bias, Fairness, Face Detection.</keywords>
		<abstract>Face image processing is used in many areas, such as commercial applications (e.g., video games), facial biometrics, and facial expression recognition. Face detection is a crucial first step for any system that processes face images; therefore, bias or unfairness at this stage may compromise all subsequent processing steps. Errors in automatic face detection may harm people, for instance, in situations where a decision may limit or restrict their freedom of movement. It is thus crucial to investigate whether such errors arise from bias or unfairness. In this paper, five well-known, top-accuracy face detectors are analyzed to investigate the presence of bias and unfairness in their results. The metrics used to identify bias and unfairness include demographic parity, the occurrence of false positives and/or false negatives, the positive prediction rate, and equalized odds. Data from about 365 different individuals were randomly selected from the Facebook Casual Conversations Dataset, resulting in approximately 5,500 videos and 550,000 frames used for face detection in the experiments. The results show that all five face detectors present a high risk of not detecting faces of female individuals and of people between 46 and 85 years old. Furthermore, for four of the five evaluated detectors, the skin tone groups associated with dark skin are those at highest risk of faces not being detected. This paper points out the need for the research community to engage in breaking the perpetuation of injustice that may be present in datasets or machine learning models.</abstract>
		<language>en</language>
		<targetfile>103.pdf</targetfile>
		<usergroup>hanna@copin.ufcg.edu.br</usergroup>
		<visibility>shown</visibility>
		<documentstage>not transferred</documentstage>
		<mirrorrepository>sid.inpe.br/banon/2001/03.30.15.38.24</mirrorrepository>
		<nexthigherunit>8JMKD3MGPEW34M/45PQ3RS</nexthigherunit>
		<nexthigherunit>8JMKD3MGPEW34M/4742MCS</nexthigherunit>
		<citingitemlist>sid.inpe.br/sibgrapi/2021/11.12.11.46 4</citingitemlist>
		<hostcollection>sid.inpe.br/banon/2001/03.30.15.38</hostcollection>
		<agreement>agreement.html .htaccess .htaccess2</agreement>
		<lasthostcollection>sid.inpe.br/banon/2001/03.30.15.38</lasthostcollection>
		<url>http://sibgrapi.sid.inpe.br/rep-/sid.inpe.br/sibgrapi/2021/09.04.19.00</url>
	</metadata>
</metadatalist>